11 research outputs found

    Big Data Analytics for Smart Cities: The H2020 CLASS Project

    Applying big-data technologies to field applications has resulted in several new needs. First, processing data across a compute continuum spanning from cloud to edge to devices, with varying capacity, architecture, etc. Second, making some computations predictable (real-time response), thus supporting both data-in-motion processing and larger-scale data-at-rest processing. Last, employing an event-driven programming model that supports mixing different APIs and models, such as Map/Reduce, CEP, sequential code, etc. The research leading to these results has received funding from the European Union’s Horizon 2020 Programme under the CLASS Project (www.class-project.eu), grant agreement No. 780622. Peer Reviewed. Postprint (author's final draft).


    Using Selective Acknowledgements to Reduce the Memory Footprint of Replicated Services

    No full text
    Abstract. This paper proposes the use of Selective Acknowledgements (SACK) from clients to services as a method for reducing the memory footprint of replicated services. The paper discusses the general concept of SACK in replicated services and presents a specific implementation of SACK for an existing replication infrastructure. Performance measurements exhibiting the effectiveness of SACK are also presented.
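    The mechanism the abstract describes can be sketched in a few lines. Below is an illustrative Python sketch under assumed semantics, not the paper's implementation: the replica buffers every reply for possible retransmission, and a selective acknowledgement piggybacked on a later request names exactly which replies the client received, so the replica can free those buffers individually. All names are hypothetical.

```python
# Illustrative sketch of selective acknowledgements (SACK) in a
# replicated service: replies stay buffered for retransmission until
# the client selectively acknowledges them. Names are hypothetical.

class ReplicaSession:
    def __init__(self):
        self.reply_buffer = {}  # request id -> buffered reply

    def handle_request(self, req_id, payload, sack=frozenset()):
        # Free the buffer of every reply the client has confirmed.
        for acked_id in sack:
            self.reply_buffer.pop(acked_id, None)
        if req_id in self.reply_buffer:
            # Duplicate request: retransmit the buffered reply.
            return self.reply_buffer[req_id]
        reply = f"result-for-{payload}"    # stand-in for real processing
        self.reply_buffer[req_id] = reply  # keep until SACKed
        return reply

session = ReplicaSession()
session.handle_request(1, "a")
session.handle_request(2, "b")
# The next request SACKs replies 1 and 2, so their buffers are freed
# immediately; only reply 3 remains buffered afterwards.
session.handle_request(3, "c", sack={1, 2})
print(sorted(session.reply_buffer))
```

    Compared with a scheme that holds every reply until a cumulative acknowledgement, naming the received replies individually lets memory be reclaimed as soon as each reply is confirmed, which is the footprint reduction the paper measures.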

    A Case for Efficient Portable Serialization in CORBA (Preliminary Version)

    No full text
    This technical note reports on the dramatic improvement in performance that was obtained in FTS, a CORBA replication service, by simply replacing the standard portable serialization and de-serialization mechanism with an optimized one that is still portable. The surprising result is that although serialization and de-serialization account for only a marginal fraction of the latency in serving clients’ requests, both the throughput and the overall latency gains were dramatic. This paper analyzes the default mechanism vs. the optimized one, and explains this counter-intuitive result. Interestingly, the lessons from this work are valid for any multi-threaded, server-based system.
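    The effect the note reports can be illustrated in miniature, outside CORBA. The hedged Python sketch below contrasts a reflective, general-purpose serializer (`pickle`) with a hand-written one (`struct`) that exploits a fixed, known record layout; the record and function names are invented for illustration, not taken from FTS.

```python
# Miniature illustration (not CORBA): a reflective, general-purpose
# serializer vs. a hand-written one that knows the record layout.
import pickle
import struct

record = (42, 3.14, 7)  # a fixed-layout record: int, double, int

def generic_serialize(rec):
    return pickle.dumps(rec)          # reflective, format-agnostic

def optimized_serialize(rec):
    return struct.pack("<idi", *rec)  # layout known in advance: 16 bytes

def optimized_deserialize(buf):
    return struct.unpack("<idi", buf)

assert optimized_deserialize(optimized_serialize(record)) == record
print(len(generic_serialize(record)), len(optimized_serialize(record)))
```

    In a multi-threaded server, even a modest per-request saving like this can compound into large throughput gains, because serialization work competes for the CPU across all request-handling threads at once.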

    Distributed Wisdom: Analyzing Distributed-System Performance: Latency vs. Throughput

    No full text
    Many textbooks and articles have discussed the fact that latency and throughput aren’t opposites. Consider the well-known comparison of the throughput of a modern cargo ship packed with tapes on a two-week journey with the bandwidth in today’s fastest networks. The cargo ship wins big-time. Clearly, if you wish to send a small packet, the Internet is a better option. However, for transferring a very large database, low-tech options would prove faster. Technical people often forget this obvious observation. For example, in the summer of 1997, one of us attended a talk by a founder and the CTO of a leading search engine. He explained that his company’s Web site was hosted in California with a mirror on the East Coast. His search engine updated its content twice a week, because the company wanted to keep the main site and its mirror synchronized, and copying the entire database over the Internet took 72 hours. An attendee asked, “In this case, why don’t you store it on a tape and send it with overnight delivery?” The speaker paused for a few seconds and replied, “Hmm, that’s a good point. We never thought about it.” You might be asking yourself what this anecdote has to do with modern distributed systems. The answer is that the distributed-systems research community still often ignores the fact that as long as the latency is reasonable, throughput is really what matters. In particular, research papers often brea…
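    The cargo-ship comparison is easy to verify with back-of-the-envelope arithmetic. The tape capacity, cargo load, and link speed below are illustrative assumptions, not figures from the column:

```python
# Back-of-the-envelope comparison (illustrative assumed numbers):
# a ship full of tapes vs. a fast long-haul network link.
TAPE_CAPACITY_TB = 10        # assumed capacity per tape cartridge
TAPES_ON_SHIP = 1_000_000    # assumed cargo load
JOURNEY_DAYS = 14
NETWORK_GBPS = 400           # assumed fast long-haul link

payload_bits = TAPES_ON_SHIP * TAPE_CAPACITY_TB * 1e12 * 8
journey_seconds = JOURNEY_DAYS * 24 * 3600
ship_bandwidth_gbps = payload_bits / journey_seconds / 1e9

# The ship's effective throughput dwarfs the link's, but its latency
# is two weeks: fine for a huge database, useless for a small packet.
print(f"ship: {ship_bandwidth_gbps:.0f} Gb/s vs network: {NETWORK_GBPS} Gb/s")
```

    Under these assumptions the ship delivers on the order of tens of thousands of gigabits per second of effective throughput, orders of magnitude above the link, which is exactly the latency-vs-throughput trade-off the column describes.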

    Development of a Concrete Unit Cell

    No full text

    A Case for Efficient Portable Serialization in CORBA

    No full text
    This technical note reports on the dramatic improvement in performance that was obtained in FTS, a CORBA replication service, by simply replacing the standard portable serialization and de-serialization mechanism with an optimized one that is still portable. The surprising result is that although serialization and de-serialization account for only a marginal fraction of the latency in serving clients' requests, both the throughput and the overall latency gains were dramatic. This paper analyzes the default mechanism vs. the optimized one, and explains this counter-intuitive result. Interestingly, the lessons from this work are valid for any multi-threaded, server-based system.
